Patent Abstract:
A method and apparatus for generating a stream from one or more images of an object, comprising: obtaining data associated with points of a point cloud representing at least a part of the object; obtaining a parametric surface according to at least one geometric characteristic associated with at least a part of the object and position information of an acquisition device used to acquire the at least one image; obtaining a height map and one or more texture maps associated with the parametric surface; generating the stream by combining a first syntax element relative to the at least one parameter, a second syntax element relative to the height map, a third syntax element relative to the at least one texture map, and a fourth syntax element relative to a position of the acquisition device. The invention further relates to a method and device for rendering an image of the object from the stream thus obtained.
Publication number: BR102017010904A2
Application number: R102017010904-6
Filing date: 2017-05-24
Publication date: 2018-05-15
Inventors: Fleureau Julien; Tapie Thierry; Thudor Franck
Applicant: Thomson Licensing
IPC main classification:
Patent description:

“METHOD, APPARATUS AND STREAM FOR IMMERSIVE VIDEO FORMAT”
TECHNICAL FIELD
[001] The present invention relates to the domain of immersive video content. The present invention is also understood in the context of formatting data representative of immersive content, for example, for rendering on end-user devices, such as mobile devices or head-mounted display devices (HMD - Head-Mounted Displays).
BACKGROUND
[002] This section is intended to introduce the reader to various aspects of the art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be useful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
[003] Display systems, such as a head-mounted display (HMD) or a CAVE, allow the user to browse immersive video content. The immersive video content may be obtained with CGI (Computer-Generated Imagery) techniques. With such immersive video content, it is possible to compute the content according to the point of view of the user watching it, but with unrealistic graphical quality. Immersive video content may also be obtained by mapping a video (for example, a video acquired by several cameras) onto a surface such as a sphere or a cube. Such immersive video content provides good image quality, but issues related to parallax arise, especially for objects of the foreground of the scene, that is, close to the cameras.
[004] In the context of immersive video content, free viewpoint video (FVV) is a technique for the representation and encoding of multiview video and subsequent re-rendering from arbitrary viewpoints. While it increases the user experience in an immersive context, the amount of data to be transported to the renderer is very large and can be an issue.
SUMMARY
[005] References in the specification to “a modality”, “an exemplary modality” or “a particular modality” indicate that the described modality may include a particular feature, structure or characteristic, but every modality may not necessarily include that particular feature, structure or characteristic. Furthermore, such phrases do not necessarily refer to the same modality. Additionally, when a particular feature, structure or characteristic is described in connection with a modality, it is understood that it is within the knowledge of the person skilled in the art to effect that feature, structure or characteristic in connection with other modalities, whether or not explicitly described.
[006] The present invention relates to a method of generating a stream from at least one image of an object of a scene, the method comprising: - obtaining data associated with points of a point cloud representing at least a part of the object; - obtaining at least one parameter representative of a parametric surface according to at least one geometric characteristic associated with the at least a part of the object and position information of an acquisition device used to acquire the at least one image, the at least one geometric characteristic being obtained from a surface associated with points of the point cloud associated with the at least a part of the object; - obtaining a height map associated with the parametric surface from the data, the height map comprising information representative of the distance between the at least a part of the object and the parametric surface; - obtaining at least one texture map associated with the parametric surface from the data; - generating the stream by combining a first syntax element relative to the at least one parameter, a second syntax element relative to the height map, a third syntax element relative to the at least one texture map and a fourth syntax element relative to a position of the acquisition device.
[007] According to a particular characteristic, the at least one parameter varies in time according to a deformation of at least part of the object.
[008] According to a specific characteristic, the data comprises texture information and information representative of depth.
[009] The present invention also relates to a device configured to implement the above-described method of generating a stream from at least one image of an object of a scene.
[010] The present invention also relates to a stream carrying data representative of an object of a scene, wherein the data comprises: - a first syntax element relative to at least one parameter representative of a parametric surface obtained according to at least one geometric characteristic associated with at least a part of the object and position information of an acquisition device used to acquire at least one image, said at least one geometric characteristic being obtained from a surface associated with points of a point cloud associated with the at least a part of the object; - a second syntax element relative to a height map obtained from second data associated with points of the point cloud representing the at least a part of the object, the height map comprising information representative of the distance between the at least a part of the object and the parametric surface; - a third syntax element relative to at least one texture map obtained from the second data; and - a fourth syntax element relative to a position of the acquisition device.
[011] According to a particular characteristic, the first syntax element varies over time according to a change in at least one parameter that varies according to a deformation of at least part of the object.
[012] According to a specific characteristic, the second data comprises texture information and information representing depth.
[013] The present invention also relates to a method of rendering an image of at least a part of an object from a stream carrying data representative of that object, the method comprising: - obtaining at least one parameter representative of a parametric surface from a first syntax element of the stream; - obtaining a height map from a second syntax element of the stream, the height map comprising information representative of the distance between the at least a part of the object and the parametric surface; - obtaining at least one texture map from a third syntax element of the stream; - obtaining data associated with points of a point cloud representing the at least a part of the object from the parametric surface, the height map and the at least one texture map; - rendering the image based on the data and on information representative of a position of an acquisition device obtained from a fourth syntax element of the stream.
[014] According to a particular characteristic, the data comprises texture information and information representative of depth.
[015] According to a specific characteristic, the rendering comprises splat rendering of said data.
[016] The present invention also relates to a device configured to implement the aforementioned rendering method of an image of at least part of an object from a stream carrying data representative of said object.
[017] The present invention also relates to a computer program product comprising program code instructions for performing the steps of the method of rendering an image of at least part of an object from a stream carrying data representative of the said object, when this program is run on a computer.
[018] The present invention also relates to a computer program product comprising program code instructions for performing the steps of the method of generating a stream from at least one image of an object in a scene.
[019] The present invention also relates to a (non-transitory) processor-readable medium having stored therein instructions for causing a processor to perform at least the aforementioned method of generating a stream from at least one image of an object of a scene.
[020] The present invention also relates to a (non-transitory) processor-readable medium having stored therein instructions for causing a processor to perform at least the above-mentioned method of rendering an image of at least a part of an object from a stream carrying data representative of that object, when this program is run on a computer.
LIST OF FIGURES
[021] The present invention will be better understood and other specific characteristics and advantages will appear when reading the description below, the description referring to the attached drawings, in which: [022] Figure 1 illustrates an immersive content, according to a particular modality of the present principles;
[023] Figures 2A and 2B illustrate a light field acquisition device configured to acquire images of a scene to obtain at least part of the immersive content of Figure 1, according to a particular embodiment of the present principles;
[024] Figure 3 illustrates representations of a part of an object in the scene acquired with the acquisition device of Figures 2A and 2B, according to a particular modality of the present principles;
[025] Figure 4 illustrates a parametric surface used in a process to represent the object of Figure 3, according to a particular modality of the present principles;
[026] Figures 5A, 5B and 5C illustrate exemplary modalities of sampling the parametric surface of Figure 4;
[027] Figure 6 illustrates the correspondence of the parametric surface of Figure 4 with respect to a deformation of the object of Figure 3, according to a particular modality of the present principles;
[028] Figure 7 illustrates the association of texture information with the parametric surface of Figure 4, according to a first particular modality of the present principles;
[029] Figure 8 illustrates the association of texture information with the parametric surface of Figure 4, according to a second particular modality of the present principles;
[030] Figure 9 illustrates an example of a device architecture configured to implement the method (s) of Figure 12 and / or Figure 13, according to an example of the present principles;
[031] Figure 10 illustrates two remote devices in Figure 9 that communicate through a communication network, according to an example of the present principles;
[032] Figure 11 illustrates the syntax of a signal carrying a description of the object in Figure 3, according to an example of the present principles;
[033] Figure 12 illustrates a method of generating a data flow that describes the object of Figure 3, according to an example of the present principles;
[034] Figure 13 illustrates a method of rendering an image of the object in Figure 3, according to an example of the present principles.
DETAILED DESCRIPTION OF THE MODALITIES
[035] The subject is now described with reference to the drawings, where similar reference numbers are used to refer to similar elements throughout the document. In the following description, for the purpose of explanation, several specific details are presented in order to provide a complete understanding of the subject. It may be evident, however, that the modalities of the subject in question can be practiced without these specific details.
[036] The present description illustrates the principles of the present invention. It will thus be understood that those skilled in the art will be able to conceive various arrangements that, although not explicitly described here, incorporate the principles of the invention.
[037] The present principles will be described in reference to a particular modality of a method of generating a data stream representative of an object of a scene and/or a method of rendering one or more images of this object from the generated data stream. A point cloud representing the object (or a part of it) is determined from one or more images of the object (or part of it) acquired with one or more acquisition devices. A parametric surface is calculated as a basis for the representation of the object (or part of it), the parametric surface being calculated using geometric characteristics of the object (for example, extreme points of the point cloud and/or normal information associated with external surface elements of the object obtained from the point cloud) and the position information of the acquisition device(s) (for example, to orient the parametric surface). A height map and one or more texture maps are determined and associated with the parametric surface. A data stream is generated by combining and/or encoding information representative of the parametric surface (i.e., its parameters), height information from the height map, texture information from the texture map(s) and position information of the acquisition device(s). On the renderer/decoder side, an image of the object (or part of it) can be obtained by decoding/extracting the information representative of the parametric surface and the associated height and texture maps.
[038] The use of a parametric surface as a reference to represent the object, with texture and height information associated with the samples of the parametric surface, makes it possible to reduce the amount of data needed to represent the object in comparison with a point cloud representation.
[039] Figure 1 illustrates an example of an immersive content 10, in the non-limiting exemplary form of a 4π steradian video content, according to a particular non-limiting modality of the present principles. Figure 1 corresponds to a flat representation of the immersive content 10. The immersive content 10 corresponds, for example, to a real scene acquired with one or more cameras or to a mixed-reality scene comprising real and virtual objects, a virtual object being, for example, synthesized using a 3D engine. A portion 11 of the immersive content 10 corresponds, for example, to the portion of the immersive content displayed on a display device adapted for viewing immersive content, the size of the portion 11 being, for example, equal to the field of view provided by the display device.
[040] The display device used to view the immersive content 10 is, for example, an HMD (Head-Mounted Display), worn on the head of a user or as part of a helmet. The HMD advantageously comprises one or more display screens (for example, LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or LCOS (Liquid Crystal on Silicon)) and sensor(s) configured to measure the change(s) of position of the HMD, for example gyroscopes or an IMU (Inertial Measurement Unit), according to one, two or three axes of the real world (yaw, pitch and/or roll). The part 11 of the immersive content 10 corresponding to the measured position of the HMD is advantageously determined with a specific function establishing the relationship between the point of view associated with the HMD in the real world and the point of view of a virtual camera associated with the immersive content 10. Controlling the part 11 of the video content to be displayed on the display screen(s) of the HMD according to the measured position of the HMD allows a user wearing the HMD to browse an immersive content that is larger than the field of view associated with the display screen(s) of the HMD. For example, if the field of view offered by the HMD is equal to 110° (for example, around the yaw axis) and if the immersive content offers a 180° content, the user wearing the HMD may turn his or her head to the right or to the left to see parts of the video content outside the field of view offered by the HMD. According to another example, the immersive system is a CAVE (Cave Automatic Virtual Environment) system, in which the immersive content is projected onto the walls of a room. The walls of the CAVE are, for example, made up of rear-projection screens or flat displays. The user can thus move his or her gaze over the different walls of the room. The CAVE system is advantageously provided with cameras acquiring images of the user to determine, by video processing of these images, the direction of the user's gaze. According to a variant, the gaze or the pose of the user is determined with a tracking system, for example an infrared tracking system, the user wearing infrared sensors. According to another variant, the immersive system is a tablet with a touch screen, the user browsing the content by scrolling it with one or more fingers sliding over the touch screen.
[041] The immersive content 10 and part 11 can also comprise object (s) in the foreground and object (s) in the background.
[042] Of course, the immersive content 10 is not limited to a 4π steradian video content, but extends to any video content (or audiovisual content) larger than the field of view 11. The immersive content can be, for example, a 2π, 2.5π or 3π steradian content and so on.
[043] Figures 2A and 2B illustrate an example of a light field acquisition device. More specifically, Figures 2A and 2B each show an array of cameras 2A, 2B (also called multicamera arrays), according to two particular modalities of the present principles.
[044] The camera array 2A comprises an array 20 of lenses or microlenses, comprising several microlenses 201, 202 to 20p, p being an integer corresponding to the number of microlenses, and one or more sensor arrays 21. The camera array 2A does not include a main lens. The lens array 20 may be a small device, which is commonly called a microlens array. A camera array with a single sensor can be considered as a special case of a plenoptic camera in which the main lens has an infinite focal length. According to a particular arrangement in which the number of photosensors is equal to the number of microlenses, that is, one photosensor is optically associated with one microlens, the camera array 2A can be seen as an arrangement of a plurality of closely spaced individual cameras (for example, micro-cameras), such as a square arrangement (as shown in Figure 2A) or a quincunx arrangement, for example.
[045] The camera array 2B corresponds to a rig of individual cameras, each comprising a lens and a photosensor array. The cameras are spaced apart, for example, by a distance of a few centimeters or less, for example 5, 7 or 10 cm.
[046] The light field data (forming a so-called light field image) obtained with such an array of cameras 2A or 2B correspond to the plurality of views of the scene, that is, to the final views that can be obtained by demultiplexing and demosaicing a raw image obtained with a plenoptic camera, such as a type 1.0 plenoptic camera, corresponding to a plenoptic camera in which the distance between the lens array and the photosensor array is equal to the focal length of the microlenses, or a type 2.0 plenoptic camera (also called a focused plenoptic camera). The cameras of the camera array 2B are calibrated according to any known method, that is, the intrinsic and extrinsic parameters of the cameras are known.
[047] The different views obtained with the light field acquisition device make it possible to obtain an immersive content or at least a part of the immersive content. Naturally, the immersive content can be obtained with an acquisition device other than a light field acquisition device, for example with a camera associated with a depth sensor (for example, an infrared emitter/receiver such as the Microsoft Kinect, or a laser emitter).
[048] Figure 3 illustrates two different representations of an object, or part of it, of the scene represented with the immersive content. According to the example in Figure 3, the object is a person, for example, moving within the scene, and a part of the object corresponding to the head is illustrated in Figure 3.
[049] A first representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, for example its external surface or its external shape. A point cloud can be seen as a vector-based structure, where each point has its coordinates (for example, three-dimensional XYZ coordinates or a depth/distance from a given point of view) and one or more attributes, also called components. An example of a component is the color component, which can be expressed in different color spaces, for example RGB (Red, Green and Blue) or YUV (Y being the luma component and UV two chrominance components). The point cloud is a representation of the object as seen from a given point of view, or from a range of points of view. The point cloud can be obtained in different ways, for example: from a capture of a real object shot by a rig of cameras, such as the camera array of Figure 2, optionally complemented by an active depth sensing device; from a capture of a virtual/synthetic object shot by a rig of virtual cameras in a modeling tool; from a mix of both real and virtual objects.
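As a purely illustrative sketch (not part of the original description), such a vector-based structure can be held as one array of coordinates and one array of color attributes per point; the class and field names below are hypothetical:

```python
import numpy as np

class PointCloud:
    """Minimal sketch: one XYZ coordinate and one RGB color component per point."""
    def __init__(self, positions: np.ndarray, colors: np.ndarray):
        assert positions.ndim == 2 and positions.shape[1] == 3  # N x 3 coordinates
        assert colors.shape == positions.shape                  # N x 3 color components
        self.positions = positions.astype(np.float32)
        self.colors = colors.astype(np.float32)

# Example: two points with their RGB attributes
cloud = PointCloud(
    positions=np.array([[0.10, 1.70, 2.30], [0.11, 1.70, 2.30]]),
    colors=np.array([[0.8, 0.6, 0.5], [0.8, 0.6, 0.5]]),
)
```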
[050] In the first case (capture of a real object), the set of cameras generates a set of images or sequences of images (videos) corresponding to the different views (different points of view). The depth information - meaning the distance from each camera center to the object's surface - is obtained either with the active depth sensing device, for example in the infrared range and based on structured-light analysis or on time of flight, or based on disparity algorithms. In both cases, all the cameras need to be calibrated, intrinsically and extrinsically. Disparity algorithms consist in searching for similar visual features in a pair of rectified camera images, typically along a one-dimensional line: the larger the pixel column difference, the closer the surface of this feature is. In the case of a camera array, the global depth information can be obtained by combining a plurality of pairwise disparity information, taking advantage of the plurality of camera pairs and thus improving the signal-to-noise ratio.
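For a calibrated and rectified pair of cameras, the relation evoked above between the pixel column difference (the disparity) and the distance to the surface is classically depth = focal length × baseline / disparity. A minimal sketch, assuming a pinhole model, a focal length expressed in pixels and a baseline in meters (values in the usage example are illustrative only):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Classical stereo relation for a rectified camera pair:
    the larger the disparity (pixel column difference), the closer the surface."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible surface point")
    return focal_px * baseline_m / disparity_px

# Example: 1000-pixel focal length, 5 cm baseline, 20-pixel disparity -> 2.5 m
print(depth_from_disparity(20.0, 1000.0, 0.05))
```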
[051] In the second case (synthetic object), the modeling tool directly provides depth information.
[052] A second representation 31 of the part of the object can be obtained from the point cloud representation 30, the second representation corresponding to a surface representation. The point cloud can be processed in order to compute its surface. For that purpose, for a given point of the point cloud, the neighboring points of this given point are used to compute the normal of the local surface at this given point, the surface element associated with this given point being derived from the normal. The process is repeated for all points to obtain the surface. Methods for reconstructing the surface from a point cloud are, for example, described by Matthew Berger et al. in “State of the Art in Surface Reconstruction from Point Clouds”, State of the Art Report, 2014. According to a variant, the surface element associated with a given point of the point cloud is obtained by applying splat rendering to this given point. The surface of the object (also called the implicit surface or the external surface of the object) is obtained by blending all the splats (for example, ellipsoids) associated with the points of the point cloud.
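One common way to compute the normal of the local surface at a given point, consistent with the description above although not mandated by it, is a principal component analysis of the point's nearest neighbors; a hedged sketch:

```python
import numpy as np

def estimate_normal(points: np.ndarray, index: int, k: int = 16) -> np.ndarray:
    """Estimate the normal of the local surface at one point of the cloud from its
    k nearest neighbors (eigenvector of the smallest eigenvalue of their covariance)."""
    p = points[index]
    # brute-force k nearest neighbors (a k-d tree would be used in practice)
    dists = np.linalg.norm(points - p, axis=1)
    neighbors = points[np.argsort(dists)[:k]]
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]            # direction of least variance
    return normal / np.linalg.norm(normal)
```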
[053] In a particular modality, the point cloud represents only partial views of the object, and not the object in its entirety, and this corresponds to the way the object is supposed to be watched on the rendering side, for example in a cinematographic scene. For example, shooting a character facing a flat array of cameras generates a point cloud on the rig side only. The back of the character does not even exist, the object is not closed on itself, and the geometric characteristics of this object are therefore the set of all the surfaces oriented toward the rig (the angle between the normal of each local surface and the ray back to the acquisition device is, for example, less than 180°).
[054] Figure 4 illustrates a surface 44 used to represent object 43, according to a non-limiting modality of the present principles. Surface 44 is a parametric surface, that is, a surface defined with parameters and defined by a parametric equation.
[055] An example of a possible parametric surface is given by a cylinder, as illustrated in Figure 4 (for the sake of clarity, only one dimension is illustrated, but the surface can be defined in 2 or 3 dimensions). A parametric surface can have any shape, for example a square, a rectangle or more complex shapes, as long as the surface can be defined with a parametric equation, that is, with a limited number of parameters. The object 43 (which may correspond to the object of Figure 3) is acquired with 3 acquisition devices 40, 41 and 42, for example 3 RGB cameras. A different point of view is associated with each acquisition device 40, 41, 42. The projection of the surface of the object 43 onto the flat cylindrical surface 45 corresponds to the mapping/projection of the parametric surface 44 onto a rectangle. The color information and depth information associated with the points of the object 43, acquired and/or computed from the images obtained with the acquisition devices 40, 41, 42, are associated with the corresponding points of the flat cylindrical surface 45, that is, color + height information is associated with each point/pixel defined by a row index and a column index. The height and color information associated with the part 450 of the surface 45 is obtained from the view of the acquisition device 40; the color and height information associated with the parts 451 of the surface 45 is obtained from the view associated with the acquisition device 41; and the color and height information associated with the parts 452 of the surface 45 is obtained from the view associated with the acquisition device 42.
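The following is a hedged sketch of such a projection for a cylindrical parametric surface: it maps one point of the cloud to a (row, column) position of the flattened surface 45 plus a signed height above the cylinder. The choice of a vertical cylinder axis (Z) passing through a known center is an assumption of this sketch, not something stated in the description:

```python
import numpy as np

def project_to_cylinder_map(point, center, radius, z_min, z_max, n_rows, n_cols):
    """Hedged sketch: map one 3D point of the point cloud onto the flattened
    cylindrical surface 45, i.e. a (row, column) index plus a signed height
    above the cylinder. Assumes a vertical cylinder axis (Z) through 'center'."""
    d = np.asarray(point, dtype=float) - np.asarray(center, dtype=float)
    angle = np.arctan2(d[1], d[0])                                  # azimuth around the axis
    col = int(round((angle + np.pi) / (2 * np.pi) * (n_cols - 1)))  # column index from the angle
    row = int(round(np.clip((d[2] - z_min) / (z_max - z_min), 0, 1) * (n_rows - 1)))
    height = np.hypot(d[0], d[1]) - radius                          # distance to the cylinder surface
    return row, col, height
```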
[056] The ellipsoid 46 illustrates a part of the surface 45, the circular points corresponding to the projection of the points of the point cloud representation of the object 43 onto the parametric surface 44 or its flat representation 45. The sampling of the parametric surface 44 may be different from the sampling resulting from the point cloud. A sampling of the parametric surface is represented with the crosses “+” on the ellipsoid 46, the sampling of the parametric surface being described with a limited number of parameters. The sampling of the parametric surface 44 can be uniform or non-uniform, as illustrated in the exemplary modalities of Figures 5A, 5B and 5C.
[057] In the example of Figure 5A, the sampling 50 of the parametric surface is uniform, that is, the columns of sample points are arranged at a same distance “a” from each other, the same applying to the rows.
[058] In the example of Figure 5B, the sampling 51 of the parametric surface is non-uniform, that is, the columns of sample points are arranged at varying distances from each other: the first two columns (starting from the left-hand side) are spaced apart by a distance “a”, the next pair of columns by “a + b”, then “a + 2b”, then “a + 3b” and so on. In the example of Figure 5B, the rows are spaced apart by a same distance.
[059] In the examples of Figures 5A and 5B, the direction associated with the height information of each sample is orthogonal to the parametric surface. In the example of Figure 5C, the direction associated with the height information of the samples of the sampling 53 varies from one sample to another with a variable angle θ0 + q·Δθ, where θ0 is a starting angle, q is an integer varying from 0 to a maximum value N, and Δθ corresponds to the angle variation between two consecutive samples.
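A minimal sketch of the sampling-direction rule of Figure 5C (the function name and the numerical values in the usage example are illustrative only):

```python
import numpy as np

def sampling_directions(theta_0: float, delta_theta: float, n: int) -> np.ndarray:
    """Angles of the directions associated with the height information of
    samples q = 0..N, as in Figure 5C: angle(q) = theta_0 + q * delta_theta."""
    q = np.arange(n + 1)
    return theta_0 + q * delta_theta

# Example: start orthogonal to the surface (90 deg) and tilt by 1 deg per sample
angles = sampling_directions(np.deg2rad(90.0), np.deg2rad(1.0), 10)
```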
[060] The density of the sampling on the parametric surface is, for example, adjusted according to: the sampling of the object, that is, of the point cloud; and / or the expected rendering quality.
[061] For example, the farther the object is from the acquisition devices, the less dense its sampling by the cameras will be, and the less dense the sampling of the parametric surface can be.
[062] The values to be associated with the samples of the parametric surface are: geometric information, namely the distance between the parametric surface and the implicit surface of the object; and color information. In the simplest form, a composite color value can be computed from the different views for the surface area of the object that corresponds to each sample of the parametric surface, leading for example to an average diffuse color (that is, the average of the color information of the points of the point cloud that can be associated with a sample of the parametric surface).
[063] The height information associated with the parametric surface samples can be stored on a height map having as many samples as the parametric surface. The color information associated with the parametric surface samples can be stored on a texture map with as many samples as the parametric surface.
[064] The height information to be associated with a given sample can be obtained by casting a ray from that sample (either orthogonal to the parametric surface or not, depending on the sample, as explained in relation to Figures 5A, 5B and 5C), the height being determined from the distance separating the sample from the points of the point cloud belonging to the area of the point cloud associated with the intersection between the ray and the surface of the object obtained from the point cloud. When several points belong to the area, the distance can be the average of the distances separating the sample from the plurality of points of the area. The parametric surface and the point cloud being defined in the world space in relation to the acquisition device, the distance between a sample of the parametric surface and a point of the external surface of the object is obtained as the Euclidean distance.
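A hedged sketch of this height computation, under the simplifying assumption that the "area associated with the intersection" is approximated by the cloud points lying within a small radius of the cast ray (that radius is an assumed parameter, not taken from the description):

```python
import numpy as np

def height_for_sample(sample_pos, ray_dir, cloud_points, radius=0.01):
    """Cast a ray from a sample of the parametric surface, keep the cloud points
    falling in the area around the ray, and return the average Euclidean distance
    from the sample to those points (the height value for that sample)."""
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    v = np.asarray(cloud_points, dtype=float) - np.asarray(sample_pos, dtype=float)
    t = v @ d                                   # abscissa of each point along the ray
    dist_to_ray = np.linalg.norm(v - np.outer(t, d), axis=1)
    hit = (t > 0) & (dist_to_ray < radius)      # points in front of the sample, near the ray
    if not np.any(hit):
        return None                             # no surface crossed: sample left empty
    return float(np.mean(np.linalg.norm(v[hit], axis=1)))
```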
[065] Likewise, the texture information to be associated with a given sample can be obtained by casting a ray from that sample. The texture information is obtained from the texture/color information of the points of the point cloud (for example, their mean) belonging to the area corresponding to the intersection between the ray and the surface of the object. In another modality, when an analytical representation of the parametric surface is known (that is, its geometry and normals), the point cloud can be directly splatted (using the associated normal and size information) onto the parametric surface, for example by making use of an iterative Newton scheme. In this case, the texture information is obtained by blending the splats.
[066] In a variant, a plurality of parametric surfaces can be associated with a same object. The object can be segmented into a plurality of parts and a different parametric surface can be associated with each part, the parametric surface associated with a given part being determined according to the specific geometry of the part and according to the position information of the acquisition device used to acquire the part. According to this variant, a height map and one or more texture maps are associated with each parametric surface. For example, if the object is a person, a first parametric surface can be associated with one leg, a second parametric surface can be associated with the other leg, a third parametric surface can be associated with one arm, a fourth parametric surface can be associated with the other arm, a fifth parametric surface can be associated with the torso and a sixth parametric surface can be associated with the head.
[067] As an option, additional textures can be added to record computational by-products of the MLS (Moving Least Squares) surface that are necessary for rendering but time-consuming to compute. Examples may be a texture of normal vectors, for example in a way equivalent to CGI normal maps, or a texture of splat geometry, such as minor and major axis directions and sizes. The restriction on these additional textures is that they must exhibit good spatial and temporal coherence characteristics in order to fit well with the compression engine. When all the necessary information is transmitted this way, the MLS kernel parameters no longer need to be transmitted.
[068] In a specific embodiment illustrated in Figure 7, a plurality of texture maps can be associated with one or more parametric surfaces. Figure 7 illustrates the generation of 2 parametric surfaces 71, 72 for the part 70 of the object, the part 70 emitting, for example, different colors according to different angles. In this case, the angular propagation information of the colors can be recorded and transported, as well as other texture information, in order to render this correctly on the client side (for example, by interpolating between the 2 colors according to the viewing direction). According to a variant, a single parametric surface can be generated in place of the 2 parametric surfaces 71, 72, the different texture maps being associated with the single parametric surface.
[069] In a specific modality illustrated in Figure 8, a plurality of parametric surfaces can be generated for a same part of the object. For example, a first parametric surface can be computed for (and associated with) the face 81 of a person. A second parametric surface can be computed for (and associated with) a part of the face 81, that is, the part 82 comprising the eyes. A first height map and a first texture map can be associated with the first parametric surface, making it possible to represent the face with a first level of detail 83, for example. A second height map and a second texture map can be associated with the second parametric surface, making it possible to represent the part 82 of the face with a second level of detail 84, for example. To reach this aim, a first definition is associated with the first parametric surface and a second definition (greater than the first definition) is associated with the second parametric surface. To make the second texture visible when rendering the face, an offset value is subtracted from the height values computed for generating the second height map. The height values stored in the second height map are then lower than the actually computed height values separating the second parametric surface from the external surface of the face. When rendering the face, the second texture information will then be positioned in front of the first texture information, with regard to the rendering point of view.
[070] Figure 6 illustrates the correspondence of the parametric surface with respect to a deformation of the object, according to a particular and non-limiting modality of the present principles. The left-hand part of Figure 6 illustrates the parametric surface 604 associated with the object 600 obtained at time t (or for a first frame A of a video) and the right-hand part of Figure 6 illustrates a parametric surface 605 associated with the object 601 (corresponding to the object 600 but with a different external shape, that is, the object 601 corresponds to a deformed version of the object 600) obtained at time t + 1 (or for a second frame B of the video temporally following the first frame A). The object 600, 601 is acquired with a set of cameras 60, 61, 62, for example corresponding to the acquisition devices 40, 41, 42 of Figure 4. The upper part of Figure 6 corresponds to a top view of the object and of the cameras, and the lower part of Figure 6 corresponds, for example, to a front view of the object and of the cameras, the cameras being illustrated with black discs at the bottom.
[071] In order to better adhere to the object, the partial cylinder 604, 605 corresponding to the parametric surface partially surrounds the object 600, 601, respectively, staying close to the object 600, 601, respectively, on the side of the camera rig 60, 61, 62 (which is typically static). The coordinates of the parametric surface 604, 605 can be obtained by computing a bounding box 602, 603 surrounding the object 600, 601, respectively, the bounding box 602, 603 being defined by the extreme coordinates (x, y, z) of the point cloud. The representative parameters of the parametric surface 604, 605 (for example, height, radius and center position for a parametric surface of cylindrical shape) are determined as those capable of encompassing the bounding box, the parametric surface 604, 605 being open toward the view of the cameras. This example illustrates that the parametric surface depends both on the (moving) object and on the location of the camera rig.
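As an illustration of this paragraph, the sketch below derives the center, radius and height of such a partial cylinder from the bounding box of the point cloud; the choice of axes (cameras looking along +X toward the object, vertical cylinder axis Z) is an assumption of this sketch only:

```python
import numpy as np

def cylinder_from_bounding_box(cloud_points):
    """Hedged sketch: derive the parameters (center, radius, height) of a partial
    cylinder encompassing the bounding box of the point cloud, centered on the
    rear face of the box and passing through its front edges. Assumes the camera
    rig looks along +X and the cylinder axis is vertical (Z)."""
    pts = np.asarray(cloud_points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)          # bounding box extremes
    center = np.array([maxs[0], (mins[1] + maxs[1]) / 2, (mins[2] + maxs[2]) / 2])
    # radius chosen so that the cylinder passes through the front edges of the box
    radius = np.hypot(maxs[0] - mins[0], (maxs[1] - mins[1]) / 2)
    height = maxs[2] - mins[2]
    return center, radius, height
```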
[072] When the object 600, 601 captured by the cameras 60, 61, 62 moves from time t to time t + 1, the point cloud used to represent the object also changes: the topology (or the geometric characteristics) of the object varies according, for example, to the movement of the object (or according to the deformation applied to the object); for example, the width and/or the height of the object changes. It is therefore relevant to adjust the topology of the parametric surface used to represent the object, together with the associated height map and texture map(s) that record and/or transmit all the geometric and/or texture information related to the point cloud for each video frame. The following constraints can be applied: the projection of the point cloud onto the parametric surface shall form video images with good spatial and temporal consistency so that they can be efficiently compressed by a regular compression engine, for example based on standards such as H264/MPEG4 or H265/HEVC or any other standard, which means that the surface is allowed to evolve smoothly, without abrupt jumps; and/or the parametric surface shall be placed in relation to the point cloud so as to maximize the parts of the parametric surface covered by the projection of the point cloud and to minimize its distance to the point cloud, thus preserving the quality of the final image, measured for example by a PSNR metric. More precisely, the parametric surface is chosen in such a way as to: take the greatest benefit of the image resolution (width x height); and/or optimize the number of useful bits to encode the depth.
[073] The evolution / change of the parametric surface in each frame can be easily recorded, transported as metadata and retrieved on the renderer / decoder side, which means that the parametric surface can be expressed in a limited number of parameters.
[074] Figure 12 illustrates a method for generating a stream comprising data representative of an object of a scene, implemented for example in a device 9 (described in relation to Figure 9), according to a non-restrictive modality of the present principles.
[075] In a step 1200, the different parameters of the device 9 are updated. In particular, the data associated with the representation of the object is initialized in any way.
[076] In a step 1201, the data associated with the points of a point cloud representing a part of the object, or the object as a whole, is obtained. The data is, for example, received from a memory device, such as the local memory of the device, or from a remote storage device such as a server (for example, over a network such as the Internet or a Local Area Network). According to another example, the data is received from one or more acquisition devices used to acquire one or more views of the scene comprising the object. The data comprises, for example, texture information (for example, color information) and distance information (for example, a depth or a height corresponding to the distance between the considered point and the point of view associated with the considered point, that is, the point of view of the acquisition device used to acquire the considered point).
[077] In a step 1202, one or more parameters representative of a parametric surface are obtained. The parametric surface is associated with the part of the object (or with the whole object) represented with the point cloud. A general expression of an exemplary parametric surface is as follows: X = f1(t1, t2), Y = f2(t1, t2), Z = f3(t1, t2), with X, Y, Z the coordinates in 3 dimensions, f1, f2, f3 the functions and t1, t2 the parameters. The parameters of the parametric surface are obtained according to the geometric characteristic(s) of the external surface associated with the point cloud and from position information of the one or more acquisition devices used to obtain the points of the point cloud. To determine the parametric surface to be associated with the considered part of the object, the coordinates of the extreme points of the point cloud can, for example, be determined from the coordinates associated with the points. The extreme points correspond to the points having the minimum or maximum value for at least one of the dimensions of the space in which the coordinates are expressed. A bounding box surrounding the point cloud is obtained from the extreme points. The parametric surface can be obtained, for example, as the cylinder centered on the center of the rear face of the bounding box and passing through the front edges of the bounding box, the reference being the acquisition device. The orientation of the parametric surface is thus determined using the position information of the acquisition device.
[078] According to a variant, the normal vectors associated with the outer surface of the part of the object are calculated from the point cloud. The variation in the orientation of normal vectors can be used to determine the parametric surface so that the parametric surface is close to the shape variation of the outer surface.
[079] In a step 1203, a height map associated with the parametric surface obtained in step 1202 is obtained, that is, determined or computed. For each sample of the parametric surface, a height value is computed by casting a ray (for example, orthogonal to the parametric surface at the considered sample). The height value to be associated with the considered sample corresponds to the distance between the considered sample and the element of the external surface of the part of the object corresponding to the intersection between the ray and the external surface. The coordinates associated with the external surface element are, for example, obtained from the points of the point cloud used to generate this external surface element. A height value can be computed for each sample of the parametric surface to obtain the height map, the height map corresponding, for example, to a two-dimensional map (or image) storing a height value for each sample of the map, the number of samples of the map corresponding to the number of samples of the sampling of the parametric surface.
[080] In a step 1204, a texture map associated with the parametric surface obtained in step 1202 is obtained, that is, determined or computed. The texture map corresponds, for example, to a two-dimensional map (or image) storing texture information (for example, color information) for each sample of the map, the number of samples of the texture map corresponding to the number of samples of the sampling of the parametric surface. The texture information associated with a considered sample of the parametric surface is determined by casting a ray, for example orthogonal to the parametric surface at the considered sample. The texture information to be stored in the texture map corresponds to the texture information associated with the element of the external surface of the part of the object crossed by the ray. The texture information associated with the surface element is obtained from the texture information of the points of the point cloud used to obtain this surface element. In a variant, several texture maps can be obtained for the parametric surface.
[081] In a step 1205, a data stream 1100 comprising data representative of the part of the object is obtained by combining the parameters obtained in step 1202, the height information obtained in step 1203 and the texture information obtained in step 1204, an example of the structure of such a stream 1100 being described in relation to Figure 11. A representation of the part of the object in the form of a parametric surface associated with a height map and one or more texture maps has the advantage of reducing the amount of data needed to represent the part of the object in comparison with a point cloud representation. Additional information representative of the position of the acquisition device(s) used to obtain the point cloud can be added to the stream. This additional information has the advantage of constraining the rendering of the part of the object on a rendering device to the limit of the range of points of view of the acquisition of the part of the object, thus avoiding rendering artifacts that may occur when trying to render the part of the object from the data stream according to a point of view that does not correspond to the range of points of view used to obtain the point cloud that is the basis of the representation of the part of the object comprised in the stream.
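Purely as an illustration of step 1205 (a real implementation would typically rely on a video codec and a standard media container such as those mentioned later in the description), a minimal sketch packing the four syntax elements into one length-prefixed byte stream; every name and the layout below are hypothetical:

```python
import json
import struct
import numpy as np

def generate_stream(surface_params, height_map, texture_map, device_position):
    """Hedged sketch of step 1205: combine the four syntax elements
    (surface parameters, height map, texture map, acquisition position)
    into a single illustrative byte stream with length-prefixed chunks."""
    header = json.dumps({"elements": ["params", "height", "texture", "position"]}).encode()
    payload = [
        json.dumps(surface_params).encode(),                  # 1st syntax element
        np.asarray(height_map, dtype=np.float32).tobytes(),   # 2nd syntax element
        np.asarray(texture_map, dtype=np.uint8).tobytes(),    # 3rd syntax element
        json.dumps(device_position).encode(),                 # 4th syntax element
    ]
    stream = bytearray()
    for chunk in [header] + payload:
        stream += struct.pack("<I", len(chunk)) + chunk       # length-prefixed chunks
    return bytes(stream)
```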
[082] In an optional step, the data stream is transmitted by an encoder and received by a decoder or a renderer for the purpose of rendering or displaying the part of the object.
[083] In a variant, the data of the stream changes over time, for example from frame to frame, for example when the shape of the external surface of the part of the object varies over time. When the external surface changes, the parameters of the parametric surface are updated, together with the height and texture maps, to represent the shape change of the part of the object.
[084] In another variant, several parametric surfaces can be used to represent the same part of the object, for example, according to different sampling resolutions.
[085] A single parametric surface can be used to represent the object as a whole, or different parametric surfaces can be used to represent the object as a whole, for example a different parametric surface being determined to represent each different part of the object. In such a variant, the data stream is obtained by combining the different parametric surfaces and the associated texture and height maps.
[086] According to another variant, a flat video (i.e., a 2D video) representative of the background of the object is added to the stream, for example in a media container such as mp4 or mkv.
[087] Figure 13 illustrates a method of rendering an image representative of at least a part of the object from the stream obtained with the method of Figure 12. The rendering method is, for example, implemented in a device 9 (described in relation to Figure 9), according to a non-restrictive modality of the present principles.
[088] In a 1300 step, the different parameters of device 9 are updated. In particular, the data associated with the representation of at least part of the object is initialized in any way.
[089] In a step 1301, one or more parameters representative of a parametric surface are obtained from the data stream 1100, an example of the structure of such a stream being described with reference to Figure 11. The one or more parameters correspond, for example, to the parameters obtained in step 1202.
[090] In a step 1302, a height map associated with the parametric surface obtained in step 1301 is obtained from the stream 1100. The height map corresponds, for example, to the height map obtained in step 1203.
[091] In a step 1303, one or more texture maps associated with the parametric surface obtained in step 1301 are obtained from the stream 1100. The texture maps correspond, for example, to the texture maps obtained in step 1204.
[092] In a step 1304, the data associated with points of a point cloud is obtained from the parametric surface obtained in step 1301, the height map obtained in step 1302 and the texture map obtained in step 1303. The points are obtained by deprojection of the samples of the parametric surface, the coordinates of the points being obtained from the coordinates of the samples and the height information associated with the samples, and the texture information of the points being obtained from the texture information associated with the samples.
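A hedged sketch of this deprojection for the cylindrical case, using the same assumed conventions as the projection sketch given earlier (vertical axis Z, radial height, NaN marking an empty sample):

```python
import numpy as np

def deproject(height_map, texture_map, center, radius, z_min, z_max):
    """Hedged sketch of step 1304: rebuild a colored point cloud from a height map
    and a texture map associated with a cylindrical parametric surface."""
    n_rows, n_cols = height_map.shape
    points, colors = [], []
    for row in range(n_rows):
        z = z_min + (z_max - z_min) * row / (n_rows - 1)
        for col in range(n_cols):
            h = height_map[row, col]
            if np.isnan(h):
                continue                                  # empty sample: no surface there
            angle = -np.pi + 2 * np.pi * col / (n_cols - 1)
            r = radius + h                                # radial distance of the point
            points.append([center[0] + r * np.cos(angle),
                           center[1] + r * np.sin(angle),
                           z])
            colors.append(texture_map[row, col])
    return np.array(points), np.array(colors)
```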
[093] In a step 1305, an image of the part of the object represented with the parametric surface, the height map and the texture map(s) is rendered from a point of view that is constrained by the position information comprised in the stream 1100. The external surface of the part of the object can, for example, be obtained by applying a splat rendering technique to the points of the obtained point cloud. In a variant, a sequence of images is rendered when the stream comprises information representative of the object, or of a part of it, for a sequence of frames (that is, images).
[094] Figure 9 illustrates an architectural example of a device 9 that can be configured to implement a method described in relation to Figures 12 and / or 13.
[095] The device 9 comprises the following elements, which are linked together by a data and address bus 91: a microprocessor 92 (or CPU), which is, for example, a DSP (or Digital Signal Processor); a ROM (or Read-Only Memory) 93; a RAM (or Random Access Memory) 94; a storage interface 95; an I/O interface 96 for receiving data to be transmitted from an application; and a power supply, for example a battery.
[096] According to an example, the power supply is external to the device. In each memory mentioned, the word «register» used in the specification can correspond to a small capacity area (some bits) or a very large area (for example, an entire program or a large amount of data received or decoded). ROM 93 comprises at least one program and parameter. ROM 93 can store algorithms and instructions for executing techniques in accordance with these principles. When turned on, CPU 92 loads the program into RAM and executes the corresponding instructions.
[097] RAM 94 comprises, in a register, the program executed by CPU 92 loaded after switching on device 9, input data in a register, intermediate data in different method states in a register, and other variables used for the execution of the method in a record.
[098] The implementations described here can be implemented, for example, in a method or a process, an apparatus, a computer program product, a data stream or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of discussed features can also be implemented in other ways (for example, a program). A device can be implemented, for example, in appropriate hardware, software and firmware. The methods can be implemented, for example, in an apparatus, such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit or a logic device programmable. Processors also include communication devices, such as, for example, computers, cell phones, portable / personal digital assistants (“PDAs”) and other devices that facilitate the communication of information between end users.
[099] According to an encoding example or an encoder, the first, second, third and / or fourth syntax elements are obtained from a source. For example, the source belongs to a set comprising: a local memory (93 or 94), for example, a video memory or RAM (or random access memory), a flash memory, a ROM (or read-only memory ), a hard disk; a storage interface (95), for example, an interface with mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic medium; a communication interface (96), for example, a fixed network interface (for example, a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as a IEEE 802.11 interface or a Bluetooth® interface); and a user interface, such as a graphical user interface that allows the user to enter data.
[0100] According to examples of decoding or decoder (s), the first, second and / or third information is sent to a destination; specifically, the destination belongs to a set comprising: a local memory (93 or 94), for example, a video memory or a RAM, a flash memory, a hard drive; a storage interface (95), for example, an interface with mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic medium; and a communication interface (96), for example, a fixed network interface (for example, a bus interface (for example, USB (or Universal Serial Bus)), a wide area network interface, a network interface local area, an HDMI (High Definition Multimedia Interface) interface or a wireless interface (such as an IEEE 802.11, WiFi® interface or a Bluetooth® interface).
[0101] According to coding or encoder examples, a bit stream comprising data representative of the object is sent to a destination. As an example, the bit stream is stored in a local or remote memory, for example, a video memory (94) or a RAM (94), a hard disk (93). In a variant, the bit stream is sent to a storage interface (95), for example, an interface with mass storage, a flash memory, ROM, an optical disc or a magnetic medium and / or transmitted through a communication interface (96), for example, an interface for a point-to-point link, a communication bus, a point-to-multipoint link or a transmission network.
[0102] According to examples of decoding or decoder(s) or rendering, the bit stream is obtained from a source. For example, the bit stream is read from a local memory, for example a video memory (94), a RAM (94), a ROM (93), a flash memory (93) or a hard disk (93). In a variant, the bit stream is received from a storage interface (95), for example an interface with a mass storage, a RAM, a ROM, a flash memory, an optical disc or a magnetic medium, and/or received from a communication interface (95), for example an interface to a point-to-point link, a bus, a point-to-multipoint link or a transmission network.
[0103] According to the examples, device 9 is configured to implement a method described in relation to Figure 12, and belongs to a set comprising: a mobile device; a communication device; a gaming device; a tablet (or tablet computer); a portable computer; a still image camera; a video camera; an encoding chip; a server (for example, a broadcast server, a video on demand server or a web server).
[0104] According to the examples, the device 9 is configured to implement a rendering method described in relation to Figure 13, and belongs to a set comprising: a mobile device; a communication device; a gaming device; a set-top box; a television set; a tablet (or tablet computer); a portable computer; and a display (such as an HMD, for example).
[0105] According to an example illustrated in Figure 10, in a transmission context between two remote devices 1001 and 1002 (of the device type 9) in a NET 1000 communication network, the device 1001 comprises means that are configured to implement a method of generating a flow as described with reference to Figure 12, and device 1002 comprises means that are configured to implement a method for rendering an image as described with reference to Figure 13.
[0106] According to an example, network 1000 is a LAN or WLAN network, adapted to transmit still images or video images with associated audio information from device 1001 to decoding / rendering devices, including device 1002.
[0107] Figure 11 illustrates an example of a modality of the syntax of such a signal when the data is transmitted over a packet-based transmission protocol. Figure 11 shows an exemplary structure 1100 of an immersive video stream. The structure consists of a container that organizes the stream into independent syntax elements. The structure may comprise a header part 1101, which is a set of data common to every syntax element of the stream. For example, the header part contains metadata about the syntax elements, describing the nature and the role of each of them. The structure may comprise a payload comprising the syntax elements 1102, 1103, 1104 and 1105, the first syntax element 1102 being relative to the parameters defining the parametric surface, the second syntax element being relative to the height map associated with the parametric surface, the third syntax element being relative to the one or more texture maps associated with the parametric surface, and the fourth syntax element being relative to the position information of the acquisition device.
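A purely illustrative sketch of such a container, with one field per syntax element of Figure 11 (the class and field names are hypothetical and do not define an actual bitstream syntax):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SyntaxElement:
    """One independent syntax element of the stream 1100 (illustrative layout)."""
    identifier: int          # e.g. 1102..1105 in the example of Figure 11
    payload: bytes

@dataclass
class ImmersiveVideoStream:
    """Hedged sketch of the container of Figure 11: a header part common to all
    syntax elements, then the four payload elements described above."""
    header: dict                                      # metadata describing nature/role of each element
    surface_parameters: SyntaxElement                 # first syntax element (1102)
    height_map: SyntaxElement                         # second syntax element (1103)
    texture_maps: List[SyntaxElement] = field(default_factory=list)  # third element(s) (1104)
    acquisition_position: Optional[SyntaxElement] = None             # fourth syntax element (1105)
```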
[0108] Naturally, the present invention is not limited to the previously described embodiments.
[0109] In particular, the present invention is not limited to a method and a device for generating a stream, but also extends to a method for encoding/decoding a packet comprising data representative of an object in a scene, to any device implementing this method, and notably to any devices comprising at least one CPU and/or at least one GPU.
[0110] The present invention also relates to a method (and a configured device) for displaying images rendered from the data stream comprising information representative of the object of the scene, and to a method (and a configured device) for rendering and displaying the object with a flat video.
[0111] The present invention also relates to a method (and a configured device) for transmitting and/or receiving the stream.
[0112] The implementations described herein can be implemented, for example, in a method or a process, an apparatus, a computer program product, a data stream or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of the features discussed can also be implemented in other forms (for example, a program). An apparatus can be implemented, for example, in appropriate hardware, software and firmware. The methods can be implemented, for example, in an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit or a programmable logic device. Processors also include communication devices, such as, for example, smartphones, tablets, computers, mobile phones, portable/personal digital assistants (PDAs) and other devices that facilitate the communication of information between end users.
[0113] Implementations of the various processes and features described herein can be incorporated into a variety of different equipment or applications, particularly, for example, equipment or applications associated with data encoding, data decoding, view generation, texture processing, and other processing of images and of related texture information and/or depth information. Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video encoder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA and other communication devices. As should be clear, the equipment can be mobile and even installed in a mobile vehicle.
[0114] In addition, the methods can be implemented by instructions executed by a processor, and such instructions (and/or data values produced by an implementation) can be stored in a processor-readable medium, such as, for example, an integrated circuit, a software carrier or another storage device, such as, for example, a hard disk, a compact disc ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM") or a read-only memory ("ROM"). The instructions can form an application program tangibly embodied in a processor-readable medium. The instructions can be, for example, in hardware, firmware, software or a combination thereof. The instructions can be found, for example, in an operating system, a separate application, or a combination of the two. A processor can therefore be characterized as, for example, both a device configured to perform a process and a device that includes a processor-readable medium (such as a storage device) with instructions for carrying out a process. In addition, a processor-readable medium can store, in addition to or instead of instructions, data values produced by an implementation.
[0115] As will be evident to one skilled in the art, implementations can produce a variety of signals formatted to carry information that can, for example, be stored or transmitted. The information can include, for example, instructions for executing a method, or data produced by one of the described implementations. For example, a signal can be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax values written by a described embodiment. Such a signal can be formatted, for example, as an electromagnetic wave (for example, using a radio-frequency portion of the spectrum) or as a baseband signal. The formatting can include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries can be, for example, analog or digital information. The signal can be transmitted over a variety of different wired or wireless connections, as is known. The signal can be stored on a processor-readable medium.
[0116] A number of implementations have been described. Nevertheless, it should be understood that various modifications can be made. For example, elements of different implementations can be combined, supplemented, modified or removed to produce other implementations. In addition, one of ordinary skill will understand that other structures and processes can be substituted for those disclosed, and that the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the disclosed implementations. Accordingly, these and other implementations are contemplated by this application.
Claims (15)
1. Method of generating a stream from at least one image of an object in a scene, CHARACTERIZED by the fact that it comprises:
- obtaining (1201) data associated with points of a point cloud representing at least a part of said object;
- obtaining (1202) parameters representative of a parametric surface according to at least one geometric characteristic associated with said at least a part of the object and position information of an acquisition device used to acquire said at least one image, said at least one geometric characteristic being obtained from said data associated with points of said point cloud associated with said at least a part of the object;
- obtaining (1203) a height map associated with said parametric surface from said data, said height map comprising information representative of the distance between said at least a part of the object and said parametric surface;
- obtaining (1204) at least one color map associated with said parametric surface from color information associated with points of said point cloud associated with said at least a part of the object;
- generating (1205) said stream by combining a first syntax element relating to the parameters, a second syntax element relating to the height map, a third syntax element relating to the at least one color map and a fourth syntax element relating to a position of said acquisition device.
2. Method, according to claim 1, CHARACTERIZED by the fact that said point cloud represents said object as seen from a range of points of view.
3. Method, according to claim 1 or 2, CHARACTERIZED by the fact that said parameters vary in time according to a deformation of said at least a part of the object.
4. Device configured to generate a stream from at least one image of an object in a scene, CHARACTERIZED by the fact that it comprises a memory associated with at least one processor configured to:
- obtain data associated with points of a point cloud representing at least a part of said object;
- obtain parameters representative of a parametric surface according to at least one geometric characteristic associated with said at least a part of the object and position information of an acquisition device used to acquire said at least one image, said at least one geometric characteristic being obtained from said data associated with points of said point cloud associated with said at least a part of the object;
- obtain a height map associated with said parametric surface from said data, said height map comprising information representative of the distance between said at least a part of the object and said parametric surface;
- obtain at least one color map associated with said parametric surface from color information associated with points of said point cloud associated with said at least a part of the object;
- generate said stream by combining a first syntax element relating to the parameters, a second syntax element relating to the height map, a third syntax element relating to the at least one color map and a fourth syntax element relating to a position of said acquisition device.
5. Device, according to claim 4, CHARACTERIZED by the fact that said point cloud represents said object as seen from a range of points of view.
6. Device, according to claim 4 or 5, CHARACTERIZED by the fact that said parameters vary in time according to a deformation of said at least a part of the object.
7. Stream carrying first data representative of an object of a scene, CHARACTERIZED by the fact that the data comprise:
- a first syntax element (1102) relating to parameters representative of a parametric surface obtained according to at least one geometric characteristic associated with at least a part of the object and position information of an acquisition device used to acquire at least one image of the object, said at least one geometric characteristic being obtained from second data associated with points of a point cloud associated with said at least a part of the object;
- a second syntax element (1103) relating to a height map obtained from said second data, the height map comprising information representative of the distance between said at least a part of the object and said parametric surface;
- a third syntax element (1104) relating to at least one color map obtained from color information associated with points of said point cloud associated with said at least a part of the object; and
- a fourth syntax element (1105) relating to a position of said acquisition device.
8. Stream, according to claim 7, CHARACTERIZED by the fact that said point cloud represents said object as seen from a range of points of view.
9. Stream, according to claim 7 or 8, CHARACTERIZED by the fact that said first syntax element (1102) varies in time according to a change of said parameters, which vary according to a deformation of said at least a part of the object.
10. Method of rendering an image of at least a part of an object from a stream carrying data representative of said object, CHARACTERIZED by the fact that it comprises:
- obtaining (1301) parameters representative of a parametric surface from a first syntax element of the stream;
- obtaining (1302) a height map from a second syntax element of the stream, the height map comprising information representative of the distance between said at least a part of the object and said parametric surface;
- obtaining (1303) at least one color map from a third syntax element of the stream;
- obtaining (1304) data associated with points of a point cloud representing said at least a part of the object from said parametric surface, said height map and said at least one color map;
- rendering (1305) said image based on said data and on information representative of a position of an acquisition device obtained from a fourth syntax element of the stream.
11. Method, according to claim 10, CHARACTERIZED by the fact that said point cloud represents said object as seen from a range of points of view.
12. Method, according to claim 10 or 11, CHARACTERIZED by the fact that the rendering comprises point rendering of said data.
13. Device configured to render an image of at least a part of an object from a stream carrying data representative of said object, CHARACTERIZED by the fact that it comprises a memory associated with at least one processor configured to:
- obtain parameters representative of a parametric surface from a first syntax element of the stream;
- obtain a height map from a second syntax element of the stream, the height map comprising information representative of the distance between said at least a part of the object and said parametric surface;
- obtain at least one color map from a third syntax element of the stream;
- obtain data associated with points of a point cloud representing said at least a part of the object from said parametric surface, said height map and said at least one color map;
- render said image based on said data and on information representative of a position of an acquisition device obtained from a fourth syntax element of the stream.
14. Device, according to claim 13, CHARACTERIZED by the fact that said point cloud represents said object as seen from a range of points of view.
15. Device, according to claim 13 or 14, CHARACTERIZED by the fact that said at least one processor is further configured to perform point rendering of said data to render said image.
Patent family:
Publication number | Publication date
MX2017006720A|2018-08-28|
RU2017117733A3|2020-06-09|
JP2018010622A|2018-01-18|
EP3249922A1|2017-11-29|
EP3249921A1|2017-11-29|
CA2967969A1|2017-11-24|
CN107426559A|2017-12-01|
US20170347055A1|2017-11-30|
KR20170132669A|2017-12-04|
RU2017117733A|2018-11-23|
Legal status:
2018-05-15| B03A| Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]|
2019-07-16| B25G| Requested change of headquarter approved|Owner name: THOMSON LICENSING (FR) |
2019-07-30| B25A| Requested transfer of rights approved|Owner name: INTERDIGITAL VC HOLDINGS, INC. (US) |
Priority:
Application number | Filing date | Patent title
EP16305600.5A|EP3249921A1|2016-05-24|2016-05-24|Method, apparatus and stream for immersive video format|
EP16305600.5|2016-05-24|